Multivariate time series forecasting constitutes important functionality in cyber-physical systems, and its prediction accuracy can be improved significantly by capturing temporal and multivariate correlations among multiple time series. State-of-the-art deep learning methods fail to construct models for full time series because model complexity grows exponentially with time series length. Instead, these methods construct local temporal and multivariate correlations within subsequences, but fail to capture correlations among subsequences, which significantly affects their forecasting accuracy. To capture the temporal and multivariate correlations among subsequences, we design a pattern discovery model that constructs correlations via diverse pattern functions. Traditional pattern discovery methods use shared and fixed pattern functions that ignore the diversity across time series. We therefore propose a novel pattern discovery method that automatically captures diverse and complex time series patterns. We also propose a learnable correlation matrix that enables the model to capture distinct correlations among multiple time series. Extensive experiments show that our model achieves state-of-the-art prediction accuracy.
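To make the two components concrete, the following is a minimal PyTorch sketch of the ideas named above: learnable pattern functions applied to subsequences and a learnable correlation matrix that mixes the series. The module name, tensor shapes, and the choice of linear pattern functions are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn

class PatternCorrelationBlock(nn.Module):
    """Sketch: learnable per-subsequence pattern functions plus a learnable
    cross-series correlation matrix (all names and sizes are illustrative)."""
    def __init__(self, n_series: int, subseq_len: int, n_patterns: int = 4):
        super().__init__()
        # Each "pattern function" is a small learnable projection of a subsequence.
        self.patterns = nn.ModuleList(
            [nn.Linear(subseq_len, subseq_len) for _ in range(n_patterns)]
        )
        # Learnable correlation matrix mixing the n_series variables.
        self.corr = nn.Parameter(torch.eye(n_series))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_series, n_subsequences, subseq_len)
        # Apply every pattern function and average their responses.
        z = torch.stack([torch.tanh(p(x)) for p in self.patterns]).mean(dim=0)
        # Mix information across series with the learned correlation matrix.
        return torch.einsum("ij,bjkl->bikl", torch.softmax(self.corr, dim=-1), z)

x = torch.randn(8, 5, 6, 24)                    # 8 samples, 5 series, 6 subsequences of length 24
print(PatternCorrelationBlock(5, 24)(x).shape)  # torch.Size([8, 5, 6, 24])
```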
Physics-Informed Neural Networks (PINNs) have recently been proposed to solve scientific and engineering problems, where physical laws are introduced into neural networks as prior knowledge. With the embedded physical laws, PINNs enable the estimation of critical parameters, which are unobservable via physical tools, through observable variables. For example, Power Electronic Converters (PECs) are essential building blocks for the green energy transition. PINNs have been applied to estimate the capacitance, which is unobservable during PEC operations, using current and voltage, which can be observed easily during operations. The estimated capacitance facilitates self-diagnostics of PECs. Existing PINNs are often manually designed, which is time-consuming and may lead to suboptimal performance due to the large number of design choices for neural network architectures and hyperparameters. In addition, PINNs are often deployed on different physical devices, e.g., PECs, with limited and varying resources. This calls for designing different PINN models under different resource constraints, making manual design even more challenging. To contend with these challenges, we propose Automated Physics-Informed Neural Networks (AutoPINN), a framework that enables the automated design of PINNs by combining AutoML and PINNs. Specifically, we first tailor a search space that allows finding high-accuracy PINNs for PEC internal parameter estimation. We then propose a resource-aware search strategy to explore the search space and find the best PINN model under different resource constraints. We experimentally demonstrate that AutoPINN is able to find more accurate PINN models than state-of-the-art, human-designed PINN models, while using fewer resources.
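As an illustration of the resource-aware idea, here is a minimal sketch that samples small MLP-style PINN bodies from a toy search space and keeps the best-scoring candidate whose parameter count fits a device budget. The search space, input/output dimensions, and the placeholder scoring are assumptions for illustration only, not AutoPINN's actual search strategy.

```python
import random
import torch.nn as nn

def build_pinn(depth: int, width: int, act: str) -> nn.Sequential:
    """Hypothetical PINN body: a plain MLP mapping (voltage, current, time) -> capacitance."""
    acts = {"tanh": nn.Tanh, "relu": nn.ReLU, "silu": nn.SiLU}
    layers, d_in = [], 3
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), acts[act]()]
        d_in = width
    layers.append(nn.Linear(d_in, 1))
    return nn.Sequential(*layers)

def n_params(model: nn.Module) -> int:
    return sum(p.numel() for p in model.parameters())

def resource_aware_random_search(budget: int, trials: int = 50, seed: int = 0):
    """Keep the highest-scoring sampled architecture that fits the parameter budget."""
    random.seed(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        cfg = dict(depth=random.choice([2, 3, 4, 5]),
                   width=random.choice([16, 32, 64, 128]),
                   act=random.choice(["tanh", "relu", "silu"]))
        model = build_pinn(**cfg)
        if n_params(model) > budget:
            continue                      # violates the device's resource constraint
        score = -random.random()          # placeholder: train briefly and use validation loss
        if score > best_score:
            best, best_score = (cfg, model), score
    return best

print(resource_aware_random_search(budget=5_000)[0])
```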
Sensors in cyber-physical systems often capture interconnected processes and thus emit correlated time series (CTS), the forecasting of which enables important applications. The key to successful CTS forecasting is to uncover the temporal dynamics of time series and the spatial correlations among time series. Deep learning-based solutions exhibit impressive performance at discerning these aspects. In particular, automated CTS forecasting, where the design of an optimal deep learning architecture is automated, enables forecasting accuracy that surpasses what has been achieved by manual approaches. However, automated CTS solutions remain in their infancy: they are only able to find optimal architectures for predefined hyperparameters, and they scale poorly to large-scale CTS. To overcome these limitations, we propose SEARCH, a joint and scalable framework that automatically devises effective CTS forecasting models. Specifically, we encode each candidate architecture and its accompanying hyperparameters into a joint graph representation. We introduce an efficient Architecture-Hyperparameter Comparator (AHC) to rank all architecture-hyperparameter pairs, and we then further evaluate the top-ranked pairs to select a final result. Extensive experiments on six benchmark datasets demonstrate that SEARCH not only eliminates manual design effort but also outperforms manually designed and existing automatically designed CTS models. In addition, it shows excellent scalability to large CTS.
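The following is a minimal sketch of the comparator-based ranking idea: a small network scores which of two encoded architecture-hyperparameter candidates is likely better, and candidates are ranked by pairwise wins. The encoding dimension, the comparator layers, and the ranking-by-wins scheme are illustrative assumptions rather than the AHC described in the paper.

```python
import torch
import torch.nn as nn

class ArchHyperComparator(nn.Module):
    """Sketch of a comparator predicting whether candidate a outperforms
    candidate b from their joint encodings (dimensions are illustrative)."""
    def __init__(self, enc_dim: int = 16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * enc_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.net(torch.cat([a, b], dim=-1)))  # P(a better than b)

def rank_candidates(encodings: torch.Tensor, comparator: ArchHyperComparator, top_k: int = 3):
    """Rank candidates by how many pairwise comparisons they win; return the top-k indices."""
    n = encodings.size(0)
    wins = torch.zeros(n)
    with torch.no_grad():
        for i in range(n):
            for j in range(n):
                if i != j and comparator(encodings[i], encodings[j]).item() > 0.5:
                    wins[i] += 1
    return wins.topk(top_k).indices.tolist()

encodings = torch.randn(10, 16)   # 10 candidate architecture-hyperparameter encodings
print(rank_candidates(encodings, ArchHyperComparator()))
```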
The continued digitization of societal processes translates into a proliferation of time series data that cover applications such as fraud detection, intrusion detection, and energy management, where anomaly detection is often essential to enable reliability and safety. Many recent studies target anomaly detection for time series data. Indeed, the area of time series anomaly detection is characterized by diverse data, methods, and evaluation strategies, and comparisons in existing studies consider only part of this diversity, which makes it difficult to select the best method for a particular problem setting. To address this shortcoming, we introduce taxonomies for data, methods, and evaluation strategies, provide a comprehensive overview of unsupervised time series anomaly detection using the taxonomies, and systematically evaluate and compare state-of-the-art traditional as well as deep learning techniques. In an empirical study using nine publicly available datasets, we apply the most commonly used performance evaluation metrics to typical methods under fair implementation standards. Based on the structuring offered by the taxonomies, we report on the empirical study and provide guidelines, in the form of comparative tables, for selecting the methods most suitable for particular application settings. Finally, we offer research directions for this dynamic field.
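For concreteness, the sketch below implements two evaluation conventions that are common in this literature, point-wise F1 and point-adjusted F1, on toy labels; it is not tied to the specific metrics or protocol used in the study.

```python
import numpy as np

def point_adjust(pred: np.ndarray, label: np.ndarray) -> np.ndarray:
    """Point adjustment, a widely used convention: if any point inside a true
    anomalous segment is detected, the whole segment counts as detected."""
    pred = pred.copy()
    i = 0
    while i < len(label):
        if label[i] == 1:
            j = i
            while j < len(label) and label[j] == 1:
                j += 1
            if pred[i:j].any():
                pred[i:j] = 1
            i = j
        else:
            i += 1
    return pred

def f1_score(pred: np.ndarray, label: np.ndarray) -> float:
    tp = int(((pred == 1) & (label == 1)).sum())
    fp = int(((pred == 1) & (label == 0)).sum())
    fn = int(((pred == 0) & (label == 1)).sum())
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

label = np.array([0, 0, 1, 1, 1, 0, 0, 1, 1, 0])
pred  = np.array([0, 0, 0, 1, 0, 0, 0, 0, 0, 0])
print(f1_score(pred, label), f1_score(point_adjust(pred, label), label))
```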
Retrosynthetic planning, which aims to find reaction pathways to synthesize target molecules, plays an important role in chemistry and drug discovery. This task is usually modeled as a search problem. Recently, data-driven methods have attracted much research interest and shown promising results for retrosynthetic planning. We observe that the same intermediate molecules are visited many times during the search process and are typically treated independently in previous tree-based methods (e.g., AND-OR tree search, Monte Carlo tree search). Such redundancies make the search process inefficient. We propose a graph-based search strategy that eliminates the redundant exploration of any intermediate molecule. Since searching over a graph is more complicated than over a tree, we further adopt a graph neural network to guide the search over the graph. Meanwhile, our method can search a batch of targets together in the graph and remove inter-target duplication present in tree-based search methods. Experimental results on two datasets demonstrate the effectiveness of our method. In particular, on the widely used USPTO benchmark, we improve the search success rate to 99.47%, advancing the previous state-of-the-art performance by 2.6 points.
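The following toy sketch illustrates the core idea of graph-based search: every intermediate molecule is expanded at most once via a shared cache, and a score function (a stand-in for the guiding graph neural network) orders the frontier. The molecules, the one-step expansion, and the constant score are placeholders, not the paper's actual components.

```python
import heapq

def graph_search(target: str, expand, score, building_blocks: set, max_steps: int = 100):
    """Sketch of graph-based retrosynthesis search: each intermediate molecule
    is expanded at most once (the cache is shared across branches and targets),
    and a score function orders the frontier."""
    explored = {}                 # molecule -> precursors; shared cache avoids re-expansion
    frontier = [(-score(target), target)]
    while frontier and len(explored) < max_steps:
        _, mol = heapq.heappop(frontier)
        if mol in explored or mol in building_blocks:
            continue
        precursors = expand(mol)  # one-step retrosynthesis model (placeholder)
        explored[mol] = precursors
        for p in precursors:
            if p not in explored and p not in building_blocks:
                heapq.heappush(frontier, (-score(p), p))
    return explored

# Toy molecule "reactions" and a constant score, purely for illustration.
routes = {"T": ["A", "B"], "A": ["C"], "B": ["C", "D"]}
print(graph_search("T", lambda m: routes.get(m, []), lambda m: 1.0, {"C", "D"}))
```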
Correlated time series (CTS) forecasting plays an essential role in many cyber-physical systems, where multiple sensors emit time series that capture interconnected processes. Deep learning-based solutions, which deliver state-of-the-art CTS forecasting performance, employ a variety of spatio-temporal (ST) blocks that are capable of modeling temporal dependencies and spatial correlations among time series. However, two challenges remain. First, ST-blocks are designed manually, which is time-consuming and costly. Second, existing forecasting models simply stack the same ST-blocks multiple times, which limits the model potential. To address these challenges, we propose AutoCTS, which is able to automatically identify highly competitive ST-blocks as well as forecasting models composed of heterogeneous ST-blocks connected using diverse topologies, as opposed to the same ST-blocks connected by simple stacking. Specifically, we design both a micro and a macro search space to model the architectures of ST-blocks and the connections among heterogeneous ST-blocks, and we provide a search strategy that is able to jointly explore the search spaces to identify optimal forecasting models. Extensive experiments on eight commonly used CTS forecasting benchmark datasets justify our design choices and demonstrate that AutoCTS is capable of automatically discovering forecasting models that outperform state-of-the-art human-designed models. This is an extended version of "AutoCTS: Automated Correlated Time Series Forecasting", to appear in PVLDB 2022.
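To illustrate the two search spaces, here is a toy sampler: the micro space chooses temporal or spatial operators on the edges of a small DAG forming one ST-block, and the macro space chooses how heterogeneous blocks are wired together. The operator names, node counts, and sampling scheme are illustrative assumptions, not the paper's actual spaces.

```python
import random

MICRO_OPS = ["gated_tcn", "informer_attention", "diffusion_gcn", "spatial_attention", "skip"]

def sample_st_block(n_nodes: int = 4) -> list:
    """Micro search space (illustrative): an ST-block is a small DAG whose edges
    carry temporal or spatial operators chosen from MICRO_OPS."""
    return [(src, dst, random.choice(MICRO_OPS))
            for dst in range(1, n_nodes) for src in range(dst)]

def sample_forecasting_model(n_blocks: int = 3) -> dict:
    """Macro search space (illustrative): heterogeneous ST-blocks plus the
    topology connecting them, instead of stacking one block repeatedly."""
    blocks = [sample_st_block() for _ in range(n_blocks)]
    # Each later block may read from any non-empty subset of earlier blocks.
    topology = {b: sorted(random.sample(range(b), k=random.randint(1, b)))
                for b in range(1, n_blocks)}
    return {"blocks": blocks, "topology": topology}

random.seed(0)
print(sample_forecasting_model())
```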
With the sweeping digitalization of societal, medical, industrial, and scientific processes, sensing technologies are being deployed that produce increasing amounts of time series data, fueling a variety of new or improved applications. In this setting, outlier detection is often important, and while neural network-based solutions exist, they leave room for improvement in terms of both accuracy and efficiency. With the objective of achieving such improvements, we propose a diversity-driven convolutional ensemble. To improve accuracy, the ensemble employs multiple basic outlier detection models built on convolutional sequence-to-sequence autoencoders that can capture temporal dependencies in time series. Further, a novel diversity-driven training method maintains diversity among the basic models, with the aim of improving the ensemble's accuracy. To improve efficiency, the approach enables a high degree of parallelism during training. In addition, it is able to transfer some model parameters from one basic model to another, which reduces training time. We report on extensive experiments using real-world multivariate time series that offer insight into the design choices underlying the new method and offer evidence that it is capable of improved accuracy and efficiency. This is an extended version of "Unsupervised Time Series Outlier Detection with Diversity-Driven Convolutional Ensembles", to appear in PVLDB 2022.
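A minimal PyTorch sketch of the training objective's flavor follows: several convolutional autoencoders reconstruct a window, and a diversity term rewards dissimilar reconstructions across the basic models. The layer sizes and the specific diversity term are simplifications assumed for illustration, not the paper's actual objective.

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """One basic outlier detector: a 1-D convolutional autoencoder over a window."""
    def __init__(self, n_vars: int, hidden: int = 16):
        super().__init__()
        self.enc = nn.Conv1d(n_vars, hidden, kernel_size=3, padding=1)
        self.dec = nn.Conv1d(hidden, n_vars, kernel_size=3, padding=1)

    def forward(self, x):                 # x: (batch, n_vars, window)
        return self.dec(torch.relu(self.enc(x)))

def ensemble_loss(models, x, diversity_weight: float = 0.1):
    """Reconstruction loss averaged over basic models plus a diversity term that
    discourages near-identical reconstructions (a simple stand-in for the
    diversity-driven training objective)."""
    recons = [m(x) for m in models]
    recon_loss = sum(((r - x) ** 2).mean() for r in recons) / len(recons)
    pairwise_dist = [((recons[i] - recons[j]) ** 2).mean()
                     for i in range(len(recons)) for j in range(i + 1, len(recons))]
    diversity = torch.stack(pairwise_dist).mean()
    return recon_loss - diversity_weight * diversity   # larger diversity is rewarded

models = nn.ModuleList([ConvAE(n_vars=3) for _ in range(4)])
x = torch.randn(8, 3, 64)
print(ensemble_loss(models, x).item())
```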
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate distortion patterns across different scales and aggravate the difficulty of the regression problem in BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression problem in a manner consistent with the easy-to-hard nature of human learning. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that PMT-IQA outperforms the comparison approaches and that both the MS and PMT modules improve the model's performance.
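The sketch below illustrates the two modules at a high level: multi-scale pooled features feed both a quality-classification head (the easier task) and a score-regression head (the harder task), with a weight that shifts progressively toward regression during training. Layer sizes, pooling scales, and the weighting schedule are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class PMTSketch(nn.Module):
    """Sketch of the two ideas named in the abstract: multi-scale features (MS)
    and a progressive multi-task head (PMT) with a classification task (easy)
    and a score-regression task (hard). Layer sizes are illustrative."""
    def __init__(self, n_quality_levels: int = 5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.scales = [1, 2, 4]                         # pooled grid sizes
        feat_dim = 32 * sum(s * s for s in self.scales)
        self.cls_head = nn.Linear(feat_dim, n_quality_levels)   # easy task
        self.reg_head = nn.Linear(feat_dim, 1)                  # hard task

    def forward(self, img):
        fmap = self.backbone(img)
        feats = [nn.functional.adaptive_avg_pool2d(fmap, s).flatten(1) for s in self.scales]
        feats = torch.cat(feats, dim=1)
        return self.cls_head(feats), self.reg_head(feats)

def progressive_weight(epoch: int, total_epochs: int) -> float:
    """Shift the loss weight from the easy classification task toward regression."""
    return min(1.0, epoch / (0.5 * total_epochs))

model = PMTSketch()
logits, score = model(torch.randn(2, 3, 64, 64))
print(logits.shape, score.shape, progressive_weight(10, 100))
```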
It has been observed in practice that applying pruning-at-initialization methods to neural networks and training the sparsified networks can not only retain the testing performance of the original dense models, but also sometimes even slightly boost generalization performance. A theoretical understanding of such experimental observations is yet to be developed. This work makes a first attempt to study how different pruning fractions affect the model's gradient descent dynamics and generalization. Specifically, it considers a classification task for overparameterized two-layer neural networks, where the network is randomly pruned at different rates at initialization. It is shown that as long as the pruning fraction is below a certain threshold, gradient descent can drive the training loss toward zero and the network exhibits good generalization performance. More surprisingly, the generalization bound improves as the pruning fraction grows. To complement this positive result, the work further shows a negative result: there exists a large pruning fraction such that, while gradient descent is still able to drive the training loss toward zero (by memorizing noise), the generalization performance is no better than random guessing. This further suggests that pruning can change the feature learning process, which leads to the performance drop of the pruned neural network. To the best of our knowledge, this is the \textbf{first} generalization result for pruned neural networks, suggesting that pruning can improve a neural network's generalization.
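The analyzed setting can be sketched informally as follows: a two-layer network is randomly pruned at initialization with a given fraction, the binary mask is kept fixed, and gradient descent trains only the surviving weights. The toy data, network width, and optimizer settings below are illustrative and unrelated to the theoretical analysis itself.

```python
import torch
import torch.nn as nn

def random_prune_at_init(layer: nn.Linear, pruning_fraction: float) -> torch.Tensor:
    """Randomly prune a fraction of the weights at initialization and return the
    fixed binary mask (an informal sketch of the setting studied above)."""
    mask = (torch.rand_like(layer.weight) >= pruning_fraction).float()
    with torch.no_grad():
        layer.weight.mul_(mask)
    return mask

# Overparameterized two-layer network for a toy binary classification task.
torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 512), nn.ReLU(), nn.Linear(512, 1))
mask = random_prune_at_init(net[0], pruning_fraction=0.5)

x, y = torch.randn(64, 10), torch.randint(0, 2, (64, 1)).float()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
for _ in range(200):
    loss = nn.functional.binary_cross_entropy_with_logits(net(x), y)
    opt.zero_grad()
    loss.backward()
    with torch.no_grad():
        net[0].weight.grad.mul_(mask)   # keep pruned weights at zero throughout training
    opt.step()
print(f"final training loss: {loss.item():.4f}")
```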
Time-series anomaly detection is an important task that has been widely applied in industry. Since manual data annotation is expensive and inefficient, most applications adopt unsupervised anomaly detection methods, but the results are usually sub-optimal and unsatisfactory to end customers. Weak supervision is a promising paradigm for obtaining a considerable number of labels at low cost: it enables customers to label data by writing heuristic rules rather than annotating each instance individually. However, in the time-series domain, it is hard for people to write reasonable labeling functions because time-series data is numerically continuous and difficult to interpret. In this paper, we propose a Label-Efficient Interactive Time-Series Anomaly Detection (LEIAD) system, which enables a user to improve the results of unsupervised anomaly detection through only a small number of interactions with the system. To achieve this goal, the system integrates weak supervision and active learning collaboratively while generating labeling functions automatically using only a few labeled data points. These techniques are complementary and reinforce one another. We conduct experiments on three time-series anomaly detection datasets, demonstrating that the proposed system is superior to existing solutions in both the weak supervision and active learning areas. Moreover, the system has been tested in a real industrial scenario, demonstrating its practicality.
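As a toy illustration of the weak-supervision ingredient, the sketch below derives simple threshold-style labeling functions from a handful of user-labeled points and combines their votes; the actual system's labeling-function generation and its integration with active learning are considerably more involved.

```python
import numpy as np

def make_labeling_functions(values: np.ndarray, labels: np.ndarray):
    """Generate simple threshold-style labeling functions from a few labeled
    points (a toy stand-in for automatic labeling-function generation).
    Each function returns 1 (anomaly), 0 (normal), or -1 (abstain)."""
    hi = values[labels == 1].min()              # smallest value labeled anomalous
    lo = np.percentile(values[labels == 0], 95) # upper range of values labeled normal
    lf_high = lambda x: np.where(x >= hi, 1, -1)
    lf_norm = lambda x: np.where(x <= lo, 0, -1)
    return [lf_high, lf_norm]

def combine_votes(lfs, x: np.ndarray) -> np.ndarray:
    """Majority vote over non-abstaining labeling functions; ties default to normal."""
    votes = np.stack([lf(x) for lf in lfs])
    anomaly = (votes == 1).sum(axis=0)
    normal = (votes == 0).sum(axis=0)
    return (anomaly > normal).astype(int)

# A few user-labeled points drive the labeling functions applied to the full series.
seed_values = np.array([0.2, 0.5, 0.4, 3.1, 2.8])
seed_labels = np.array([0,   0,   0,   1,   1])
series = np.array([0.3, 0.6, 2.9, 0.1, 3.5, 0.4])
print(combine_votes(make_labeling_functions(seed_values, seed_labels), series))
```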